28 research outputs found

    Bayesian network classifiers for categorizing cortical GABAergic interneurons

    Abstract: An accepted classification of GABAergic interneurons of the cerebral cortex is a major goal in neuroscience. A recently proposed taxonomy based on patterns of axonal arborization promises to be a pragmatic method for achieving this goal. It involves characterizing interneurons according to five axonal arborization features, called F1–F5, and classifying them into a set of predefined types, most of which are established in the literature. Unfortunately, there is little consensus among expert neuroscientists regarding the morphological definitions of some of the proposed types. While supervised classifiers were able to categorize the interneurons in accordance with experts’ assignments, their accuracy was limited because they were trained with disputed labels. Thus, here we automatically classify interneuron subsets with different label reliability thresholds (i.e., such that every cell’s label is backed by at least a certain (threshold) number of experts). We quantify the cells with parameters of axonal and dendritic morphologies and, in order to predict the type, also with axonal features F1–F4 provided by the experts. Using Bayesian network classifiers, we accurately characterize and classify the interneurons and identify useful predictor variables. In particular, we discriminate among reliable examples of common basket, horse-tail, large basket, and Martinotti cells with up to 89.52% accuracy, and single out the number of branches at 180 µm from the soma, the convex hull 2D area, and axonal features F1–F4 as especially useful predictors for distinguishing among these types. These results open up new possibilities for an objective and pragmatic classification of interneurons.
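The label-reliability thresholding described above is straightforward to sketch: keep only cells whose majority label is backed by at least a threshold number of experts. A minimal Python illustration with hypothetical expert votes (the study's actual data are not reproduced here):

```python
from collections import Counter

def reliable_cells(votes, threshold):
    """Keep cells whose majority label is backed by at least `threshold` experts.

    votes: dict mapping cell id -> list of type labels assigned by experts.
    Returns a dict mapping each kept cell id to its majority label.
    """
    kept = {}
    for cell, labels in votes.items():
        label, count = Counter(labels).most_common(1)[0]
        if count >= threshold:
            kept[cell] = label
    return kept

# Toy example (hypothetical votes, not the study's data): 5 experts per cell.
votes = {
    "c1": ["MA", "MA", "MA", "MA", "CB"],  # majority backed by 4 experts
    "c2": ["CB", "LB", "CB", "LB", "HT"],  # majority backed by only 2 experts
}
print(reliable_cells(votes, threshold=4))  # {'c1': 'MA'}
```

Raising the threshold shrinks the data set while making the remaining labels more reliable, which is the trade-off the abstract exploits.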

    Multi-dimensional classification of GABAergic interneurons with Bayesian network-modeled label uncertainty

    Abstract: Interneuron classification is an important and long-debated topic in neuroscience. A recent study provided a data set of digitally reconstructed interneurons classified by 42 leading neuroscientists according to a pragmatic classification scheme composed of five categorical variables, namely, of the interneuron type and four features of axonal morphology. From this data set we now learned a model which can classify interneurons, on the basis of their axonal morphometric parameters, into these five descriptive variables simultaneously. Because of differences in opinion among the neuroscientists, especially regarding neuronal type, for many interneurons we lacked a unique, agreed-upon classification, which we could use to guide model learning. Instead, we guided model learning with a probability distribution over the neuronal type and the axonal features, obtained, for each interneuron, from the neuroscientists’ classification choices. We conveniently encoded such probability distributions with Bayesian networks, calling them label Bayesian networks (LBNs), and developed a method to predict them. This method predicts an LBN by forming a probabilistic consensus among the LBNs of the interneurons most similar to the one being classified. We used 18 axonal morphometric parameters as predictor variables, 13 of which we introduce in this paper as quantitative counterparts to the categorical axonal features. We were able to accurately predict interneuronal LBNs. Furthermore, when extracting crisp (i.e., non-probabilistic) predictions from the predicted LBNs, our method outperformed related work on interneuron classification. Our results indicate that our method is adequate for multi-dimensional classification of interneurons with probabilistic labels.
Moreover, the introduced morphometric parameters are good predictors of interneuron type and the four features of axonal morphology and thus may serve as objective counterparts to the subjective, categorical axonal features.
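The consensus step can be illustrated with flat label distributions standing in for the paper's label Bayesian networks: this Python sketch simply averages the (optionally distance-weighted) distributions of the most similar neighbours. The type order and probabilities below are hypothetical:

```python
import numpy as np

def consensus_label(neighbor_dists, weights=None):
    """Average the neighbours' label distributions into one consensus
    distribution; a crisp prediction is its argmax. This is a simplified
    stand-in for the paper's consensus among label Bayesian networks."""
    d = np.asarray(neighbor_dists, dtype=float)
    w = np.ones(len(d)) if weights is None else np.asarray(weights, dtype=float)
    w = w / w.sum()                       # normalize the neighbour weights
    consensus = w @ d                     # weighted average of distributions
    return consensus / consensus.sum()    # renormalize to a distribution

# Hypothetical type order: [CB, HT, LB, MA]; three nearest neighbours.
neighbors = [
    [0.6, 0.0, 0.2, 0.2],
    [0.5, 0.1, 0.4, 0.0],
    [0.7, 0.0, 0.3, 0.0],
]
print(consensus_label(neighbors))  # ≈ [0.60, 0.033, 0.30, 0.067] -> crisp: CB
```

The actual method forms the consensus over full joint distributions of type plus four axonal features, encoded as Bayesian networks, rather than over a single flat distribution.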

    Classifying GABAergic interneurons with semi-supervised projected model-based clustering

    Objectives: A recently introduced pragmatic scheme promises to be a useful catalog of interneuron names. We sought to automatically classify digitally reconstructed interneuronal morphologies according to this scheme. Simultaneously, we sought to discover possible subtypes of these types that might emerge during automatic classification (clustering). We also investigated which morphometric properties were most relevant for this classification.
    Materials and methods: A set of 118 digitally reconstructed interneuronal morphologies classified into the common basket (CB), horse-tail (HT), large basket (LB), and Martinotti (MA) interneuron types by 42 of the world's leading neuroscientists, quantified by five simple morphometric properties of the axon and four of the dendrites. We labeled each neuron with the type most commonly assigned to it by the experts. We then removed this class information for each type separately, and applied semi-supervised clustering to those cells (keeping the others' cluster membership fixed), to assess separation from other types and look for the formation of new groups (subtypes). We performed this same experiment unlabeling the cells of two types at a time, and of half the cells of a single type at a time. The clustering model is a finite mixture of Gaussians which we adapted for the estimation of local (per-cluster) feature relevance. We performed the described experiments on three different subsets of the data, formed according to how many experts agreed on type membership: at least 18 experts (the full data set), at least 21 (73 neurons), and at least 26 (47 neurons).
    Results: Interneurons with more reliable type labels were classified more accurately. We classified HT cells with 100% accuracy, MA cells with 73% accuracy, and CB and LB cells with 56% and 58% accuracy, respectively. We identified three subtypes of the MA type, one subtype each of the CB and LB types, and no subtypes of HT (it was a single, homogeneous type). We got maximum (adapted) silhouette width and ARI values of 1, 0.83, 0.79, and 0.42 when unlabeling the HT, CB, LB, and MA types, respectively, confirming the quality of the formed cluster solutions. The subtypes identified when unlabeling a single type also emerged when unlabeling two types at a time, confirming their validity. Axonal morphometric properties were more relevant than dendritic ones, with the axonal polar histogram length in the [π, 2π) angle interval being particularly useful.
    Conclusions: The applied semi-supervised clustering method can accurately discriminate among the CB, HT, LB, and MA interneuron types while discovering potential subtypes, and is therefore useful for neuronal classification. The discovery of potential subtypes suggests that some of these types are more heterogeneous than previously thought. Finally, axonal variables seem to be more relevant than dendritic ones for distinguishing among the CB, HT, LB, and MA interneuron types.
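The unlabel-then-cluster protocol can be sketched as an EM loop in which labelled cells keep fixed one-hot cluster responsibilities and only the unlabelled cells are re-assigned. This is a simplified Python sketch under stated assumptions (diagonal covariances, toy data in place of the 118 reconstructions, and none of the paper's per-cluster feature-relevance estimation):

```python
import numpy as np

def semi_supervised_gmm(X, labels, k, n_iter=50, reg=1e-6):
    """EM for a diagonal-covariance Gaussian mixture in which labelled points
    (labels >= 0) keep fixed one-hot responsibilities and only unlabelled
    points (label -1) are re-assigned during the E-step."""
    n, _ = X.shape
    R = np.full((n, k), 1.0 / k)           # responsibilities
    for i, y in enumerate(labels):
        if y >= 0:                          # clamp labelled cells
            R[i] = 0.0
            R[i, y] = 1.0
    free = labels < 0
    for _ in range(n_iter):
        # M-step: mixture weights, means, and diagonal variances.
        Nk = R.sum(axis=0) + 1e-12
        pi = Nk / n
        mu = (R.T @ X) / Nk[:, None]
        var = np.stack([(R[:, j, None] * (X - mu[j]) ** 2).sum(axis=0) / Nk[j] + reg
                        for j in range(k)])
        # E-step: Gaussian log-densities, applied only to unlabelled cells.
        logp = np.stack([np.log(pi[j])
                         - 0.5 * ((X - mu[j]) ** 2 / var[j]
                                  + np.log(2 * np.pi * var[j])).sum(axis=1)
                         for j in range(k)]).T
        logp -= logp.max(axis=1, keepdims=True)
        P = np.exp(logp)
        R[free] = (P / P.sum(axis=1, keepdims=True))[free]
    return R.argmax(axis=1)

# Toy data: two well-separated "types"; one anchor cell labelled per type.
X = np.array([[0.0, 0.0], [0.5, 0.1], [0.1, 0.4],
              [10.0, 10.0], [10.4, 9.8], [9.9, 10.3]])
labels = np.array([0, -1, -1, 1, -1, -1])
print(semi_supervised_gmm(X, labels, k=2))  # [0 0 0 1 1 1]
```

New subtypes, as in the paper, would correspond to initializing extra clusters beyond the labelled ones (k larger than the number of anchored types).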

    Prevalence, associated factors and outcomes of pressure injuries in adult intensive care unit patients: the DecubICUs study

    Funder: European Society of Intensive Care Medicine (doi: http://dx.doi.org/10.13039/501100013347). Funder: Flemish Society for Critical Care Nurses.
    Abstract: Purpose: Intensive care unit (ICU) patients are particularly susceptible to developing pressure injuries. Epidemiologic data are, however, unavailable. We aimed to provide an international picture of the extent of pressure injuries and factors associated with ICU-acquired pressure injuries in adult ICU patients. Methods: International 1-day point-prevalence study; follow-up for outcome assessment until hospital discharge (maximum 12 weeks). Factors associated with ICU-acquired pressure injury and hospital mortality were assessed by generalised linear mixed-effects regression analysis. Results: Data from 13,254 patients in 1117 ICUs (90 countries) revealed 6747 pressure injuries; 3997 (59.2%) were ICU-acquired. Overall prevalence was 26.6% (95% confidence interval [CI] 25.9–27.3). ICU-acquired prevalence was 16.2% (95% CI 15.6–16.8). Sacrum (37%) and heels (19.5%) were most affected. Factors independently associated with ICU-acquired pressure injuries were older age, male sex, being underweight, emergency surgery, higher Simplified Acute Physiology Score II, Braden score <19, ICU stay >3 days, comorbidities (chronic obstructive pulmonary disease, immunodeficiency), organ support (renal replacement, mechanical ventilation on ICU admission), and being in a low- or lower-middle-income economy. Gradually increasing associations with mortality were identified for increasing severity of pressure injury: stage I (odds ratio [OR] 1.5; 95% CI 1.2–1.8), stage II (OR 1.6; 95% CI 1.4–1.9), and stage III or worse (OR 2.8; 95% CI 2.3–3.3). Conclusion: Pressure injuries are common in adult ICU patients. ICU-acquired pressure injuries are associated with mainly intrinsic factors and mortality. Optimal care standards, increased awareness, appropriate resource allocation, and further research into optimal prevention are pivotal to tackle this important patient safety threat.
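As a rough arithmetic check on the reported overall prevalence interval, a plain Wald (normal-approximation) interval can be computed from the abstract's own numbers. The case count of 3,526 patients is inferred from 26.6% of 13,254, and the study may have used a different interval method, so the last digit can differ from the published 25.9–27.3:

```python
import math

# Patients with >= 1 pressure injury, inferred from the reported 26.6%
# of 13,254 patients (the 6747 figure counts injuries, not patients).
n, cases = 13254, 3526
p = cases / n
se = math.sqrt(p * (1 - p) / n)            # standard error of a proportion
lo, hi = p - 1.96 * se, p + 1.96 * se      # 95% Wald interval
print(f"prevalence {100*p:.1f}% (95% CI {100*lo:.1f}-{100*hi:.1f})")
# -> prevalence 26.6% (95% CI 25.9-27.4); the published upper bound is 27.3
```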

    Contributions to Bayesian network classifiers and interneuron classification

    Machine learning is concerned with the automatic extraction of patterns from data.
Its most common form is supervised classification, where we learn a predictive model from examples labeled as belonging to one out of a number of possible classes. One type of model, based on encoding the probability distribution over the examples and the classes, is the Bayesian network classifier. Most search and score algorithms for learning Bayesian network classifiers traverse the space of directed acyclic graphs (DAGs), making arbitrary yet possibly suboptimal arc directionality decisions. This can be remedied by learning in the space of DAG equivalence classes, which is commonly done for general Bayesian networks but has not been widely applied to Bayesian network classifiers. First, we identify the smallest subspace of DAGs that covers all possible class-posterior distributions when data is complete. All the DAGs in this space, which we call minimal class-focused DAGs (MC-DAGs), are such that every arc is directed towards a child of the class variable. Second, we adapt the greedy equivalence search (GES) by adding operator validity criteria which ensure that GES only visits states within our space. Third, we specify how to efficiently evaluate the discriminative score of a GES operator for an MC-DAG in time independent of the number of variables and without converting the completed partially directed acyclic graph (CPDAG), which represents an equivalence class, into a DAG. Our adapted algorithm has shown promising results in preliminary experiments. Few of the many proposed methods for learning Bayesian network classifiers are available in popular software tools. Some promising methods are only implemented as standalone, often scarcely documented, programs, while many others lack publicly available implementations. We have implemented the bnclassify package for the R environment for statistical computing, which provides a unified interface and documentation for a number of such algorithms.
Structure learning algorithms include variants of the greedy hill-climbing search that can be combined with discriminative scores, an adaptation of the Chow-Liu algorithm, and the averaged one-dependence estimator. Parameter estimation includes naive-Bayes-specific methods based on discriminative score optimization and Bayesian model averaging. A major problem in neuroscience is to determine the types of GABAergic interneurons. These neurons are very diverse with regard to morphological, electrophysiological, molecular, and synaptic properties. Most researchers consider that interneurons can be grouped into types, with much less variability within types than among them. Yet, among the many morphological types established in the literature, only the chandelier and Martinotti types have widely accepted definitions. Some authors expect that a fully data-driven approach will solve this with an objective classification by clustering neurons according to their molecular, morphological, and electrophysiological features. Currently, however, the data are too scarce for this and are almost always limited to either morphological, electrophysiological, or molecular features. We learned supervised classifiers from neuron morphologies, pre-classified into some of the established interneuron types. We were interested in whether these types, some of which lack widely accepted definitions, could be distinguished by quantitative models. Also, interpretable and accurate models could allow us to better understand these types. Leveraging prior knowledge on existing types, via class labels, may allow us to extract insight from the existing scarce data, which is likely insufficient for a fully unsupervised classification. We used two sets of neuron morphology reconstructions. The first, the Gardener's set, consisted of 237 cells which were classified by 42 leading neuroscientists into ten interneuron types.
The degree of agreement among neuroscientists varied widely, with 29 cells for which at least 35 neuroscientists agreed on the type, yet 67 cells for which no more than 15 agreed on a type. The neuroscientists also classified the cells according to four categorical features of axonal morphology, largely agreeing on their values. Learning to reproduce the classification of this representative group of neuroscientists could provide objective, consensus-based models of interneuron type and features. We first learned separate Bayesian network classifier models for the type and the four axonal features, labelling each cell by its majority vote among the 42 neuroscientists. We did this with different subsets of neurons, formed by increasing the threshold on label reliability, which we defined as the minimal number of neuroscientists agreeing on the majority type. Second, we used probabilistic class labels while predicting the five variables at once. To this end, we proposed an instance-based classifier that handles instances with probabilistic labels encoded as Bayesian networks. This classifier predicts the multi-dimensional probabilistic labels by forming a consensus among a set of Bayesian networks. Third, we considered a descriptive, rather than predictive, approach, by semi-supervised projected clustering of the data. We 1) unlabelled some of the instances; 2) initialized a cluster for each type with the cells of that type; and 3) clustered the unlabelled cells into either the clusters of known types or newly created clusters. We sought potential subtypes of the established types, while estimating localized feature relevance for the types/subtypes. For this, we adapted a mixture of Gaussians with localized feature selection by simplifying its definition of feature irrelevance.
The second, the Markram set, consisted of 217 reconstructions of rat interneurons from the Markram laboratory, pre-classified into one of seven morphological types, with a single class label per cell. We trained classifiers for each type in a one-versus-all fashion by applying state-of-the-art learning algorithms. With only seven chandelier and 15 bitufted cells, but 123 basket cells, the sample was insufficient to accurately distinguish each of the seven types. We were, however, able to learn accurate and interpretable models for the Martinotti, basket, and nest basket types, and moderately accurate models for the double bouquet and small basket types.
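The one-versus-all setup described above can be sketched in Python on synthetic stand-in data (the reconstructions themselves are not reproduced here, and the random forest is just one example of a state-of-the-art classifier; it is not necessarily the one used in the thesis):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsRestClassifier

# Synthetic stand-in for morphometric features: three hypothetical types
# with shifted feature means so that they are separable.
rng = np.random.default_rng(0)
X = rng.normal(size=(120, 5))
y = rng.integers(0, 3, size=120)
X[y == 1] += 3.0
X[y == 2] -= 3.0

# One binary classifier per type, trained type-versus-rest.
ovr = OneVsRestClassifier(RandomForestClassifier(n_estimators=100, random_state=0))
scores = cross_val_score(ovr, X, y, cv=5)
print(f"mean CV accuracy: {scores.mean():.2f}")
```

With strongly imbalanced per-type counts, as in the Markram set, the per-type binary problems become imbalanced too, which is one reason the rare types are hard to model accurately.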

    Learning Bayesian network classifiers with completed partially directed acyclic graphs

    Most search and score algorithms for learning Bayesian network classifiers from data traverse the space of directed acyclic graphs (DAGs), making arbitrary yet possibly suboptimal arc directionality decisions. This can be remedied by learning in the space of DAG equivalence classes. We provide a number of contributions to existing work along this line. First, we identify the smallest subspace of DAGs that covers all possible class-posterior distributions when data is complete. All the DAGs in this space, which we call minimal class-focused DAGs (MC-DAGs), are such that every arc is directed towards a child of the class variable. Second, in order to traverse the equivalence classes of MC-DAGs, we adapt the greedy equivalence search (GES) by adding operator validity criteria which ensure that GES only visits states within our space. Third, we specify how to efficiently evaluate the discriminative score of a GES operator for an MC-DAG in time independent of the number of variables and without converting the completed partially directed acyclic graph (CPDAG), which represents an equivalence class, into a DAG. The adapted GES performed well on real-world data sets.
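The MC-DAG property is easy to check directly: every arc must point to a child of the class variable. A small Python sketch with hypothetical structures (the variable names are illustrative, not from the paper):

```python
def is_mc_dag(arcs, class_var):
    """Check the MC-DAG property: every arc in the graph must be directed
    towards a child of the class variable. arcs is a set of (parent, child)
    tuples; assumes the graph is already a valid DAG."""
    children = {c for p, c in arcs if p == class_var}
    return all(c in children for p, c in arcs)

# Naive Bayes structure: class -> each feature; every arc head is a class child.
nb = {("C", "X1"), ("C", "X2"), ("C", "X3")}
# A TAN-style structure adds feature-to-feature arcs, still into class children.
tan = nb | {("X1", "X2"), ("X2", "X3")}
# An arc into a node that is not a child of C violates the property.
bad = nb | {("X1", "Z")}

print(is_mc_dag(nb, "C"), is_mc_dag(tan, "C"), is_mc_dag(bad, "C"))  # True True False
```

Intuitively, arcs into non-children of the class cannot change the class-posterior distribution, which is why the MC-DAG subspace suffices.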

    Comparing the Electrophysiology and Morphology of Human and Mouse Layer 2/3 Pyramidal Neurons With Bayesian Networks

    Pyramidal neurons are the most common neurons in the cerebral cortex. Understanding how they differ between species is a key challenge in neuroscience. We compared human temporal cortex and mouse visual cortex pyramidal neurons from the Allen Cell Types Database in terms of their electrophysiology and dendritic morphology. We found that, among other differences, human pyramidal neurons had a higher action potential threshold voltage, a lower input resistance, and larger dendritic arbors. We learned Gaussian Bayesian networks from the data in order to identify correlations and conditional independencies between the variables and compare them between the species. We found strong correlations between electrophysiological and morphological variables in both species. In human cells, electrophysiological variables were correlated even with morphological variables that are not directly related to dendritic arbor size or diameter, such as mean bifurcation angle and mean branch tortuosity. Cortical depth was correlated with both electrophysiological and morphological variables in both species, and its effect on electrophysiology could not be explained in terms of the morphological variables. For some variables, the effect of cortical depth was opposite in the two species. Overall, the correlations among the variables differed strikingly between human and mouse neurons. Besides identifying correlations and conditional independencies, the learned Bayesian networks might be useful for probabilistic reasoning regarding the morphology and electrophysiology of pyramidal neurons.
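In a Gaussian model such as a Gaussian Bayesian network, conditional independencies correspond to zero partial correlations, which can be read off the inverse covariance (precision) matrix. A Python sketch with a hypothetical depth -> arbor size -> input resistance chain (simulated data, not the Allen Cell Types Database):

```python
import numpy as np

def partial_correlations(X):
    """Partial correlation of each variable pair given all the others,
    computed from the precision matrix; in a Gaussian model a zero entry
    indicates conditional independence given the remaining variables."""
    prec = np.linalg.inv(np.cov(X, rowvar=False))
    d = np.sqrt(np.diag(prec))
    return -prec / np.outer(d, d)

# Hypothetical chain: depth influences arbor size, which influences resistance.
rng = np.random.default_rng(0)
depth = rng.normal(size=5000)
arbor = depth + 0.5 * rng.normal(size=5000)
resistance = -arbor + 0.5 * rng.normal(size=5000)
X = np.column_stack([depth, arbor, resistance])

P = partial_correlations(X)
marginal = np.corrcoef(X, rowvar=False)
# depth and resistance are strongly correlated marginally, yet close to
# conditionally independent given arbor size:
print(round(marginal[0, 2], 2), round(P[0, 2], 2))
```

This is the kind of distinction the learned networks expose: cortical depth can correlate with electrophysiology marginally while the network shows whether that effect is mediated by morphology.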

    bnclassify: Learning Bayesian Network Classifiers

    The bnclassify package provides state-of-the-art algorithms for learning Bayesian network classifiers from data. For structure learning, it provides variants of the greedy hill-climbing search, a well-known adaptation of the Chow-Liu algorithm, and averaged one-dependence estimators. It provides Bayesian and maximum likelihood parameter estimation, as well as three naive-Bayes-specific methods based on discriminative score optimization and Bayesian model averaging. The implementation is efficient enough to allow for time-consuming discriminative scores on medium-sized data sets. bnclassify provides utilities for model evaluation, such as cross-validated accuracy and penalized log-likelihood scores, and for analysis of the underlying networks, including network plotting via the Rgraphviz package. It is extensively tested, with over 200 automated tests that give a code coverage of 94%. Here we present the main functionalities, illustrate them with a number of data sets, and comment on related software.
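bnclassify itself is an R package; its cross-validated accuracy workflow can nonetheless be mirrored in Python, here with a Gaussian naive Bayes as a stand-in classifier on a standard data set:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Cross-validated accuracy, analogous to bnclassify's cv() utility in R.
X, y = load_iris(return_X_y=True)
scores = cross_val_score(GaussianNB(), X, y, cv=10)
print(f"10-fold CV accuracy: {scores.mean():.3f}")
```

In bnclassify the same idea is expressed over discrete Bayesian network classifiers, with additional scores such as penalized log-likelihood available for model comparison.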